16 research outputs found

    Facial soft biometrics for recognition in the wild: recent works, annotation and COTS evaluation

    Full text link
    The role of soft biometrics in enhancing person recognition systems in unconstrained scenarios has not been extensively studied. Here, we explore the utility of the following modalities: gender, ethnicity, age, glasses, beard and moustache. We consider two scenarios: i) manual estimation of soft biometrics, and ii) automatic estimation from two Commercial Off-The-Shelf (COTS) systems. All experiments are reported using the LFW database. First, we study the discrimination capabilities of soft biometrics standalone. Then, experiments are carried out fusing soft biometrics with two state-of-the-art face recognition systems based on deep learning. We observe that soft biometrics are a valuable complement to the face modality in unconstrained scenarios, with relative improvements of up to 40%/15% in verification performance when using manual/automatic soft biometrics estimation. Results are reproducible, as we make public our manual annotations and COTS outputs of soft biometrics over LFW, as well as the face recognition scores. This work was funded by the Spanish Guardia Civil and project CogniMetrics (TEC2015-70627-R) from MINECO/FEDER.
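
    To illustrate the kind of score-level fusion described in this abstract, here is a minimal sketch. The attribute set, the agreement measure and the weights are illustrative assumptions, not the method or values used in the paper.

```python
def soft_agreement(attrs_a, attrs_b):
    """Fraction of soft-biometric labels (e.g. gender, glasses, beard)
    on which two face samples agree."""
    keys = attrs_a.keys() & attrs_b.keys()
    return sum(attrs_a[k] == attrs_b[k] for k in keys) / len(keys)

def fuse_scores(face_score, soft_score, w_face=0.9):
    """Weighted-sum fusion of a face verification score with a
    soft-biometric agreement score (both assumed in [0, 1])."""
    return w_face * face_score + (1.0 - w_face) * soft_score

a = {"gender": "M", "glasses": True, "beard": False}
b = {"gender": "M", "glasses": False, "beard": False}
fused = fuse_scores(0.62, soft_agreement(a, b))  # pulls the face-only score up slightly
```

    Because the soft-biometric term only nudges the face score, a weak face match between samples sharing many attributes gains a small boost, which is the intuition behind the reported verification improvements.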

    Image-based Gender Estimation from Body and Face across Distances

    Get PDF
    Gender estimation has received increased attention due to its use in a number of pertinent security and commercial applications. Automated gender estimation algorithms are mainly based on extracting representative features from face images. In this work we study gender estimation based on information deduced jointly from face and body, extracted from single-shot images. The approach addresses challenging settings such as low-resolution images, as well as settings where faces are occluded. Specifically, the face-based features include local binary patterns (LBP) and scale-invariant feature transform (SIFT) features, projected into a PCA space. The features of the novel body-based algorithm proposed in this work include continuous shape information extracted from body silhouettes and texture information retained by HOG descriptors. Support Vector Machines (SVMs) are used for classification of both body and face features. We conduct experiments on images extracted from video sequences of the Multi-Biometric Tunnel database, focusing on three distance settings: close, medium and far, ranging from full body exposure (far setting) to head-and-shoulders exposure (close setting). The experiments suggest that while face-based gender estimation performs best in the close-distance setting, body-based gender estimation performs best when a large part of the body is visible. Finally, we present two score-level fusion schemes for face- and body-based features, outperforming the two individual modalities in most cases.
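
    The score-level fusion step can be sketched as follows. This is a hedged illustration assuming min-max-normalised SVM decision values and hand-picked distance-dependent weights, not the fusion rules evaluated in the paper.

```python
def min_max_norm(scores):
    """Map raw classifier outputs to [0, 1], a common pre-fusion step."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def weighted_fusion(face_scores, body_scores, w_face):
    """Score-level fusion of face- and body-based gender classifiers.
    At close range the face modality gets more weight; at far range,
    the body modality does. The weights here are illustrative."""
    f, b = min_max_norm(face_scores), min_max_norm(body_scores)
    return [w_face * fi + (1.0 - w_face) * bi for fi, bi in zip(f, b)]

face = [2.1, -0.3, 1.5, 0.2]  # hypothetical SVM decision values (face)
body = [0.7, 0.1, 0.9, 0.4]   # hypothetical SVM decision values (body)
fused_close = weighted_fusion(face, body, w_face=0.7)  # close distance
fused_far = weighted_fusion(face, body, w_face=0.3)    # far distance
```

    Shifting the weight toward the body modality as distance grows mirrors the experimental finding that each modality dominates at the range where it is best resolved.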

    Graphene and graphene oxide induce ROS production in human HaCaT skin keratinocytes: The role of xanthine oxidase and NADH dehydrogenase

    Get PDF
    The extraordinary physicochemical properties of graphene-based nanomaterials (GBNs) make them promising tools in nanotechnology and biomedicine. Considering the skin contact as one of the most feasible exposure routes to GBNs, the mechanism of toxicity of two GBNs (few-layer-graphene, FLG, and graphene oxide, GO) towards human HaCaT skin keratinocytes was investigated. Both materials induced a significant mitochondrial membrane depolarization: 72 h cell exposure to 100 µg mL-1 FLG or GO increased mitochondrial depolarization by 44% and 56%, respectively, while the positive control valinomycin (0.1 µg mL-1) increased mitochondrial depolarization by 48%. Since the effect was not prevented by cyclosporine-A, it appears to be unrelated to mitochondrial transition pore opening. By contrast, it seems to be mediated by reactive oxygen species (ROS) production: FLG and GO induced time- and concentration-dependent cellular ROS production, significant already at the concentration of 0.4 µg mL-1 after 24 h exposure. Among a panel of specific inhibitors of the major ROS-producing enzymes, diphenyliodonium, rotenone and allopurinol significantly reverted or even abolished FLG- or GO-induced ROS production. Intriguingly, the same inhibitors also significantly reduced FLG- or GO-induced mitochondrial depolarization and cytotoxicity. This study shows that FLG and GO induce a cytotoxic effect due to a sustained mitochondrial depolarization. This seems to be mediated by a significant cellular ROS production, caused by the activation of flavoprotein-based oxidative enzymes, such as NADH dehydrogenase and xanthine oxidase.

    Enhanced Self-Perception in Mixed Reality: Egocentric Arm Segmentation and Database with Automatic Labeling

    Full text link
    © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. In this study, we focus on the egocentric segmentation of arms to improve self-perception in Augmented Virtuality (AV). The main contributions of this work are: i) a comprehensive survey of segmentation algorithms for AV; ii) an Egocentric Arm Segmentation Dataset (EgoArm), composed of more than 10,000 images, demographically inclusive (variations of skin color and gender), and open for research purposes, together with all details required for the automated generation of ground truth and semi-synthetic images; iii) the proposal of a deep learning network to segment arms in AV; iv) a detailed quantitative and qualitative evaluation showcasing the usefulness of the deep network and the EgoArm dataset, reporting results on several real egocentric hand datasets, including GTEA Gaze+, EDSH, EgoHands, Ego Youtube Hands, THU-Read, TEgO, FPAB, and Ego Gesture, which allow for direct comparisons with existing approaches using color or depth. Results confirm the suitability of the EgoArm dataset for this task, achieving improvements of up to 40% with respect to the baseline network, depending on the particular dataset. Results also suggest that, while approaches based on color or depth can work under controlled conditions (lack of occlusion, uniform lighting, only objects of interest in the near range, controlled background, etc.), deep learning is more robust in real AV applications.
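
    Quantitative results on segmentation benchmarks like these are commonly reported with intersection-over-union (IoU). A minimal sketch of that metric on flattened binary masks is shown below; the toy masks and the evaluation details are assumptions for illustration, not the paper's exact protocol.

```python
def mask_iou(pred, gt):
    """Intersection-over-Union between two binary masks, flattened to
    sequences of 0/1 pixel labels: overlapping foreground pixels
    divided by all pixels labelled foreground in either mask."""
    inter = sum(1 for p, g in zip(pred, gt) if p and g)
    union = sum(1 for p, g in zip(pred, gt) if p or g)
    return inter / union if union else 1.0

pred = [1, 1, 0, 0, 1]  # predicted arm pixels (toy example)
gt   = [1, 0, 0, 1, 1]  # ground-truth arm pixels
score = mask_iou(pred, gt)  # 2 overlapping / 4 in the union = 0.5
```

    A 40% improvement over a baseline network would correspond to a proportional increase of this score on a given dataset.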

    A Survey of Super-Resolution in Iris Biometrics With Evaluation of Dictionary-Learning

    No full text
    The lack of resolution has a negative impact on the performance of image-based biometrics. While many generic super-resolution methods have been proposed to restore low-resolution images, they usually aim to enhance their visual appearance. However, an overall visual enhancement of biometric images does not necessarily correlate with better recognition performance. Reconstruction approaches thus need to incorporate the specific information of the target biometric modality to effectively improve recognition performance. This paper presents a comprehensive survey of iris super-resolution approaches proposed in the literature. We have also adapted an eigen-patches reconstruction method based on the principal component analysis (PCA) eigen-transformation of local image patches. The structure of the iris is exploited by building a patch-position-dependent dictionary. In addition, image patches are restored separately, each having its own reconstruction weights. This allows the solution to be locally optimized, helping to preserve local information. To evaluate the algorithm, we degraded the high-resolution images from the CASIA Interval V3 database. Different restorations were considered, with 15×15 pixels being the smallest resolution evaluated. To the best of our knowledge, this is the smallest resolution employed in the literature. The experimental framework is complemented with six publicly available iris comparators that were used to carry out biometric verification and identification experiments. The experimental results show that the proposed method significantly outperforms both bilinear and bicubic interpolation at very low resolutions. A number of comparators attain an impressive equal error rate as low as 5% and a Top-1 accuracy of 77%–84% when considering iris images of only 15×15 pixels. These results clearly demonstrate the benefit of using trained super-resolution techniques to improve the quality of iris images prior to matching.
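
    The eigen-patch idea can be roughly sketched as PCA reconstruction of a patch from a position-dependent dictionary. This is a simplification under stated assumptions: real pipelines learn a mapping from low-resolution coefficients to a high-resolution dictionary, while here a patch is simply projected onto and rebuilt from the leading eigen-patches.

```python
import numpy as np

def eigen_patch_reconstruct(patch, train_patches, n_components=8):
    """Project a flattened patch onto the PCA basis of co-located
    training patches and rebuild it from the leading components.
    Each patch position would use its own dictionary, so the
    reconstruction weights are locally optimised."""
    mean = train_patches.mean(axis=0)
    centred = train_patches - mean
    # Rows of vt are the eigen-patches (principal directions)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    basis = vt[:n_components]
    coeffs = basis @ (patch - mean)   # patch-specific weights
    return mean + coeffs @ basis

rng = np.random.default_rng(0)
dictionary = rng.random((40, 15 * 15))  # 40 training patches, 15x15 pixels each
restored = eigen_patch_reconstruct(dictionary[0], dictionary)
```

    Restoring each patch with its own coefficients, rather than one global projection, is what allows the reconstruction to preserve local iris texture.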